
    Fully leakage-resilient signatures revisited: Graceful degradation, noisy leakage, and construction in the bounded-retrieval model

    We construct new leakage-resilient signature schemes. Our schemes remain unforgeable against an adversary leaking arbitrary (yet bounded) information on the entire state of the signer (sometimes known as fully leakage resilience), including the random coin tosses of the signing algorithm. The main feature of our constructions is that they offer a graceful degradation of security in situations where standard existential unforgeability is impossible.

    Continuously non-malleable codes with split-state refresh

    Non-malleable codes for the split-state model allow one to encode a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the corresponding modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, since otherwise generic attacks become possible. In this paper we propose a solution to this limitation, by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which avoids the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among which the fact that, for the first time, it does not rely on the self-destruct feature.
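    To make the split-state model concrete, here is a minimal Python sketch of its interface: a message is encoded into two parts, and each part is tampered with independently. The encoding below is plain additive secret sharing, chosen only to illustrate the syntax; it is deliberately not non-malleable (the tampering shown produces a related message), which is exactly the behaviour the paper's codes rule out. The modulus P is an arbitrary choice for the sketch.

```python
import secrets

P = 2**61 - 1  # toy field modulus (our choice, not from the paper)

def encode(msg):
    # Split-state encoding: two parts, each uniform on its own.
    # Plain additive sharing -- illustrative only, NOT non-malleable.
    left = secrets.randbelow(P)
    right = (msg - left) % P
    return left, right

def decode(left, right):
    return (left + right) % P

def tamper(parts, f_left, f_right):
    # Split-state tampering: each part is modified independently.
    left, right = parts
    return f_left(left), f_right(right)

msg = 42
parts = encode(msg)
assert decode(*parts) == msg
# Additive sharing fails non-malleability: independent tampering yields a
# *related* message, precisely what non-malleable codes must prevent.
tampered = tamper(parts, lambda l: (l + 1) % P, lambda r: r)
assert decode(*tampered) == (msg + 1) % P
```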

    From Known-Plaintext Security to Chosen-Plaintext Security

    We present a new encryption mode for block ciphers. The mode is efficient and is secure against chosen-plaintext attack (CPA) as soon as the underlying symmetric cipher is secure against known-plaintext attack (KPA). We prove that known (and widely used) encryption modes, such as CBC mode and counter mode, do not have this property. In particular, we prove that CBC mode using a KPA-secure cipher is KPA secure but need not be CPA secure, and we prove that counter mode using a KPA-secure cipher need not even be KPA secure. The analysis is done in a concrete security framework.
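    The structural difference between the two classic modes discussed above can be sketched as follows. The `toy_cipher` stand-in is just XOR with the key (completely insecure) and serves only to show how each mode invokes the underlying cipher; this illustrates the generic structure of CBC and counter mode, not the paper's new mode.

```python
BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cipher(key, block):
    # Placeholder "block cipher" (XOR with the key) -- NOT secure;
    # it only exposes the structure of the two modes.
    return xor(block, key)

def cbc_encrypt(key, iv, blocks):
    prev, out = iv, []
    for p in blocks:
        prev = toy_cipher(key, xor(p, prev))   # C_i = E_K(P_i XOR C_{i-1})
        out.append(prev)
    return out

def cbc_decrypt(key, iv, blocks):
    prev, out = iv, []
    for c in blocks:
        # P_i = D_K(C_i) XOR C_{i-1}; the toy cipher is its own inverse.
        out.append(xor(toy_cipher(key, c), prev))
        prev = c
    return out

def ctr_encrypt(key, nonce, blocks):
    # C_i = P_i XOR E_K(nonce + i); decryption is the same operation.
    return [xor(p, toy_cipher(key, (nonce + i).to_bytes(BLOCK, "big")))
            for i, p in enumerate(blocks)]
```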

    Lower Bounds for Oblivious Data Structures

    An oblivious data structure is a data structure whose memory access pattern reveals no information about the operations performed on it. Such data structures were introduced by Wang et al. [ACM SIGSAC'14] and are intended for situations where one wishes to store the data structure at an untrusted server. One way to obtain an oblivious data structure is simply to run a classic data structure on an oblivious RAM (ORAM). Until very recently, this resulted in an overhead of ω(lg n) for the most natural setting of parameters. Moreover, a recent lower bound for ORAMs by Larsen and Nielsen [CRYPTO'18] shows that they always incur an overhead of at least Ω(lg n) if used in a black-box manner. To circumvent the ω(lg n) overhead, researchers have instead studied classic data structure problems more directly and have obtained efficient solutions for many such problems, such as stacks, queues, deques, priority queues and search trees. However, none of these data structures process operations faster than Θ(lg n), leaving open the question of whether even faster solutions exist. In this paper, we rule out this possibility by proving Ω(lg n) lower bounds for oblivious stacks, queues, deques, priority queues and search trees. Comment: To appear at SODA'1
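    For intuition on the overhead being measured, here is a sketch of the trivial oblivious stack at the opposite extreme of the paper's Ω(lg n) bound: every operation reads and rewrites every cell, so the server-visible access pattern is identical for push and pop, at O(n) cost per operation. All names are illustrative.

```python
class ObliviousStack:
    """Trivially oblivious stack: every operation touches ALL cells, so the
    physical access pattern is independent of the operation sequence.
    Cost is O(n) per operation; the paper proves Omega(lg n) overhead is
    unavoidable, pinning down where real solutions must sit."""

    def __init__(self, capacity):
        self.cells = [None] * capacity
        self.top = 0  # kept in private client memory, not at the server

    def _scan(self, target, value):
        old = None
        for i in range(len(self.cells)):   # touch every cell on every op
            v = self.cells[i]              # physical read
            if i == target:
                old, self.cells[i] = v, value
            else:
                self.cells[i] = v          # dummy rewrite hides the real write
        return old

    def push(self, x):
        self._scan(self.top, x)
        self.top += 1

    def pop(self):
        self.top -= 1
        return self._scan(self.top, None)
```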

    On the Number of Synchronous Rounds Required for Byzantine Agreement

    Byzantine agreement is typically considered with respect to either a fully synchronous network or a fully asynchronous one. In the synchronous case, achieving Byzantine agreement requires either t+1 deterministic rounds or at least some large constant expected number of rounds. In this paper we examine the question of how many initial synchronous rounds are required for Byzantine agreement if we allow switching to asynchronous operation afterwards. Let n = h + t be the number of parties, where h are honest and t are corrupted. As the main result we show that, in the model with a public-key infrastructure and signatures, d + O(1) deterministic synchronous rounds are sufficient, where d is the minimal integer such that n - d > 3(t - d). This improves over the t+1 necessary deterministic rounds for almost all cases, and over the exact expected number of rounds in the non-deterministic case for many cases.
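    The threshold d can be computed directly from its defining condition in the abstract; a small sketch (the function name is ours):

```python
def min_initial_rounds(n, t):
    # Smallest d with n - d > 3(t - d), equivalently 2d > 3t - n,
    # as defined in the abstract.
    d = 0
    while not (n - d > 3 * (t - d)):
        d += 1
    return d
```

    For example, with n = 10 parties and t = 4 corruptions the condition gives d = 2, so d + O(1) initial synchronous rounds suffice instead of the t + 1 = 5 deterministic rounds of a fully synchronous protocol; with n = 7 and t = 2 we have n > 3t already, so d = 0.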

    Early Stopping for Any Number of Corruptions

    Minimizing the round complexity of byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early stopping byzantine broadcast protocol that tolerates up to t = n-1 malicious corruptions and terminates in O(min{f^2, t+1}) rounds for any execution with f ≤ t actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public key infrastructure. Prior early-stopping protocols all either require honest majority or tolerate only up to t = (1-ε)n malicious corruptions while requiring either trusted setup or strong number theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest majority setting to settings with a dishonest majority.
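    The early-stopping guarantee can be illustrated by evaluating the stated bound (constants hidden by the O-notation are omitted; the function name is ours):

```python
def round_bound(n, f):
    # Asymptotic round count from the abstract: O(min{f^2, t+1})
    # with t = n - 1 tolerated corruptions and f actual corruptions.
    t = n - 1
    return min(f * f, t + 1)
```

    With n = 100 parties, an execution with only f = 5 actual corruptions stops within O(25) rounds, while the worst case f = t is still capped at O(t+1) = O(100) rounds.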

    An Efficient Pseudo-Random Generator with Applications to Public-Key Encryption and Constant-Round Multiparty Computation

    We present a pseudo-random bit generator expanding a uniformly random bit-string r of length k/2, where k is the security parameter, into a pseudo-random bit-string of length 2k − log^2(k) using one modular exponentiation. In contrast to all previous high expansion-rate pseudo-random bit generators, no hashing is necessary. The security of the generator is proved relative to Paillier's composite degree residuosity assumption. As a first application of our pseudo-random bit generator we exploit its efficiency to optimise Paillier's cryptosystem by a factor of (at least) 2 in both running time and usage of random bits. We then exploit the algebraic properties of the generator to construct an efficient protocol for secure constant-round multiparty function evaluation in the cryptographic setting. This construction gives an improvement in communication complexity over previous protocols on the order of nk^2, where n is the number of participants and k is the security parameter, resulting in a communication complexity of O(nk^2|C|) bits, where C is a Boolean circuit computing the function in question.
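    The claimed expansion can be checked numerically. The sketch below assumes base-2 logarithms and rounds the logarithm up, which the abstract does not specify:

```python
import math

def prg_output_bits(k):
    # Output length from the abstract: 2k - log^2(k) pseudorandom bits
    # from a k/2-bit seed (base-2 log assumed, rounded up -- our choice).
    return 2 * k - math.ceil(math.log2(k)) ** 2

k = 2048
seed_bits = k // 2          # 1024-bit seed
out_bits = prg_output_bits(k)  # 2*2048 - 11^2 = 3975 pseudorandom bits
```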

    Fully Leakage-Resilient Codes

    Leakage-resilient codes (LRCs) are probabilistic encoding schemes that guarantee message hiding even under some bounded leakage on the codeword. We introduce the notion of fully leakage-resilient codes (FLRCs), where the adversary can leak λ_0 bits from the encoding process, i.e., from the message and the randomness involved during encoding. In addition, the adversary can, as usual, leak from the codeword. We give a simulation-based definition requiring that the adversary's leakage from the encoding process and the codeword can be simulated given just λ_0 bits of leakage from the message. For λ_0 = 0 our new simulation-based notion is equivalent to the usual game-based definition. An FLRC would be interesting in its own right and would be useful in building other leakage-resilient primitives in a composable manner. We give a fairly general impossibility result for FLRCs in the popular split-state model, where the codeword is broken into independent parts and where the leakage occurs independently on the parts. We show that if the leakage is allowed to be any poly-time function of the secret and if collision-resistant hash functions exist, then there is no FLRC for the split-state model. This result requires the message length to be linear in the security parameter; however, we can extend the impossibility to FLRCs for constant-length messages under assumptions related to differing-inputs obfuscation. These results show that it is highly unlikely that we can build FLRCs for the split-state model when the leakage can be any poly-time function of the secret state. We then give two feasibility results for weaker models. First, we show that for NC^0-bounded leakage from the randomness and arbitrary poly-time leakage from the parts of the codeword, the inner-product construction proposed by Daví et al. (SCN'10) and subsequently improved by Dziembowski and Faust (ASIACRYPT'11) is an FLRC for the split-state model. Second, we provide a compiler from any LRC to an FLRC in the common reference string model for any fixed leakage family of small cardinality. In particular, this compiler applies to the split-state model, but also to many other models.
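    The inner-product construction mentioned above can be sketched in a few lines: a message m is encoded as two random vectors L and R over a field with ⟨L, R⟩ = m, and decoding recomputes the inner product. The tiny prime and vector length below are toy parameters; real instantiations use large fields, and the leakage-resilience argument is not reproduced here.

```python
import secrets

P = 101  # toy prime field (our choice; real schemes use large fields)
N = 4    # vector length, also a toy parameter

def encode(m):
    # Sample random L, R with <L, R> = m (mod P).
    while True:
        L = [secrets.randbelow(P) for _ in range(N)]
        if L[-1] != 0:  # last coordinate must be invertible
            break
    R = [secrets.randbelow(P) for _ in range(N - 1)]
    partial = sum(l * r for l, r in zip(L, R)) % P
    # Fix R's last coordinate so the inner product equals the message.
    R.append(((m - partial) * pow(L[-1], P - 2, P)) % P)
    return L, R

def decode(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

m = 77
L, R = encode(m)
assert decode(L, R) == m
```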

    Invisible Adaptive Attacks

    We introduce the concept of an invisible adaptive attack (IAA) against cryptographic protocols. Or rather, it is a class of attacks where the protocol itself is the attack, and where this cannot be seen by the security model. As an example, assume that we have some cryptographic security model M, and assume that we have a current setting of the real world with some cryptographic infrastructure in place, like a PKI. Select some object from this real-world infrastructure, like the public key pk_0 of some root certificate authority (CA). Now design a protocol π which is secure in M. Then massage it into π̂, which runs exactly like π, except that if the public key pk of the root CA happens to be pk_0, then it is completely insecure. Of course π̂ should be considered insecure. However, in current security models existing infrastructure is modelled by generating it at random in the experiment defining security. Therefore, in the model, the root CA will have a fresh, random public key pk. Hence pk ≠ pk_0, except with negligible probability, and thus M will typically deem π̂ secure. The problem is that, to notice the above attack in a security model, we need to properly model the correlation between π̂ and pk. However, this correlation was made by the adversary, and it is naïve to believe that he will report it correctly to the security model. It is the protocol itself, and how to model it, which is the attack. Furthermore, since a model cannot see a real-world object, like the current infrastructure, the correlation is invisible to the model when not reported by the adversary.
    Besides introducing the new concept of an invisible adaptive attack, we have the following contributions: (1) We show that a popular security model, the generalized universal composability (GUC) model introduced by Canetti, Dodis, Pass and Walfish in 2007 [CDPW07], allows an IAA along the lines of the attack sketched above. This is not a problem specific to the GUC model, but it is most interesting to demonstrate it for the GUC model, as it was developed exactly to model security for protocols running with a common infrastructure which has been set up once and for all before the protocols are run. (2) We show how to modify the GUC model to catch invisible adaptive attacks relative to existing infrastructure, introducing the strong externalized universal composability (SEUC) model. Conceptually, when given a protocol to analyse, we assume the worst-case correlation with the existing infrastructure, and we deem the protocol secure if it is secure in the presence of this worst-case correlation. I.e., a protocol is deemed insecure if there could exist an IAA which uses the given protocol. We consider this new way to define security a main conceptual contribution of the paper. Properly modelling this conceptual idea is technically challenging and requires completely novel ideas; we consider this the main technical contribution of the paper. We prove that the new model has secure modular composition, as do the UC and GUC models. (3) We show that in the SEUC model any well-formed ideal functionality can be realised securely under standard computational assumptions, using an infrastructure, or setup assumption, known as an augmented common reference string. We do so by slightly modifying a protocol from [CDPW07] and reproving its security in the SEUC model.
    Our techniques seem specific to modelling IAAs relative to existing infrastructure. One can, however, imagine more general IAAs, relative, for instance, to values being dynamically generated by secure protocols currently running in practice, like a broadcast service or a cloud service. We do not know how to model IAAs in general and hence open up a new avenue of investigation.
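    The attack from the example in the abstract is easy to express in code. The sketch below uses invented stand-ins for π and the root CA key: in the security experiment the CA key is freshly sampled, so π̂ behaves exactly like π there, while on the one targeted real-world key it leaks the plaintext.

```python
import os

# Stands in for the real-world root CA key pk_0 the adversary targets.
PK_ROOT_CA = b"\x02" + os.urandom(32)

def pi(message, key):
    # Stand-in for a protocol that is secure in the model M
    # (a toy XOR "encryption"; the point is only the control flow below).
    stream = key * (len(message) // len(key) + 1)
    return bytes(m ^ k for m, k in zip(message, stream))

def pi_hat(message, key, root_ca_pk):
    # The invisible adaptive attack: identical to pi, except against the
    # one real-world key pk_0, where it outputs the plaintext in the clear.
    if root_ca_pk == PK_ROOT_CA:
        return message  # completely insecure on the real infrastructure
    return pi(message, key)
```

    A model that samples the CA key freshly in its experiment sees `pi_hat` and `pi` agree except with negligible probability, and so deems π̂ secure; the SEUC model instead quantifies over worst-case correlations with the existing infrastructure.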

    LEGO for Two Party Secure Computation

    The first and still most popular solution for secure two-party computation relies on Yao's garbled circuits. Unfortunately, Yao's construction provides security only against passive adversaries. Several constructions (zero-knowledge compilers, cut-and-choose) are known that provide security against active adversaries, but most of them are not efficient enough to be considered practical. In this paper we propose a new approach, called LEGO (Large Efficient Garbled-circuit Optimization), for two-party computation, which allows constructing more efficient protocols secure against active adversaries. The basic idea is the following: Alice constructs and provides to Bob a set of garbled NAND gates. A fraction of them is checked by Alice giving Bob the randomness used to construct them. When the check goes through, with overwhelming probability there are very few bad gates among the non-checked gates. Bob permutes these gates and connects them into a Yao circuit, according to a fault-tolerant circuit design which computes the desired function even in the presence of a few random faulty gates. Finally, he evaluates this Yao circuit in the usual way. For large circuits, our protocol offers better performance than any other existing protocol. The protocol is universally composable (UC) in the OT-hybrid model.
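    A back-of-the-envelope version of the cut-and-choose step: if Alice supplies `total` gates of which `bad` are faulty and Bob opens a uniformly random `checked`-subset, the probability that every bad gate escapes into the evaluation circuit is a ratio of binomial coefficients. This simplification ignores the fault-tolerant circuit design, which deliberately tolerates a few surviving bad gates; all names are ours.

```python
from math import comb

def escape_probability(total, bad, checked):
    # Probability that NONE of the `bad` gates fall into a uniformly random
    # `checked`-subset, i.e. all bad gates survive into the evaluation circuit.
    if bad > total - checked:
        return 0.0  # more bad gates than unchecked slots: at least one is caught
    return comb(total - bad, checked) / comb(total, checked)

# With 100 gates, 10 bad, and half of them opened, the chance that all
# 10 bad gates evade the check is already well below 0.1%.
p = escape_probability(100, 10, 50)
```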